Fix classification regression and reproduce JPP 2020 paper results#323

Open
krystophny wants to merge 13 commits into main from fix/classification-regression

Conversation


@krystophny krystophny commented Mar 19, 2026

Summary

Fixes multiple classification regressions and extends the classification example to reproduce all results from the JPP 2020 paper (Albert, Kasilov, Kernbichler).

Bug fix: fast_class marking regular orbits as lost (579d8ef)

The fast_class feature propagated the J_parallel classifier completion status (ierr_cot) directly to the integration error flag (ierr), causing ALL classified orbits -- including regular ones -- to be marked as lost. This produced wildly wrong confined fractions (e.g., fc=0.57 instead of fc=0.94 for QI s=0.3 with 100k particles).

The fix sets regular=.True. for J_parallel-classified regular orbits, which correctly: (a) stops integration early, (b) counts the orbit in confpart at all remaining time steps, and (c) records times_lost = trace_time (confined). When class_plot is on, the old termination-at-ntcut behavior is preserved so classification output is written.
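The before/after effect on the confined fraction can be modeled with a small, purely illustrative Python sketch (all names here are hypothetical stand-ins, not the actual Fortran variables):

```python
# Illustrative model (hypothetical names, not the real Fortran code) of
# how propagating the classifier status into ierr inflated losses.

def confined_fraction(orbits, propagate_ierr_cot):
    """orbits: list of (classified_regular, actually_lost) flag pairs."""
    lost = 0
    for classified_regular, actually_lost in orbits:
        if propagate_ierr_cot and classified_regular:
            lost += 1  # bug: classifier completion read as an orbit loss
        elif actually_lost:
            lost += 1
    return 1.0 - lost / len(orbits)

# 80 regular-classified confined orbits, 14 other confined, 6 truly lost:
orbits = [(True, False)] * 80 + [(False, False)] * 14 + [(False, True)] * 6
assert abs(confined_fraction(orbits, propagate_ierr_cot=False) - 0.94) < 1e-12
assert abs(confined_fraction(orbits, propagate_ierr_cot=True) - 0.14) < 1e-12
```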

Earlier fixes in this branch:

  • Restore volume sampling and bminmax output for classification (81e7d51)
  • Fix transposed axes in classification plot (dba1f1b)
  • Fix crash when fast_class and tcut are both enabled (9880838)
  • Add classification example reproducing JPP 2020 paper results (fc4f856)

Extended classification example reproduces Figures 5-8 and Table 1 using the actual paper equilibria:

  • QI (Drevlak 2014, wout_23_1900_fix_bdry.nc)
  • QH (Drevlak 2018, wout_qh_8_7.nc)
  • QA (Henneberg 2019, wout_henneberg_qa.nc)

Verification

Test passes after fix

$ make test-fast
100% tests passed, 0 tests failed out of 46
Total Test time (real) = 285.98 sec

Confined fractions match paper (1000 particles, same as paper)

QI s=0.3: 1000 particles, 62 lost, f_c = 0.9380 +/- 0.0149
QI s=0.6: 1000 particles, 136 lost, f_c = 0.8640 +/- 0.0212
QH s=0.3: 1000 particles, 7 lost, f_c = 0.9930 +/- 0.0052
QH s=0.6: 1000 particles, 118 lost, f_c = 0.8820 +/- 0.0200
QA s=0.3: 1000 particles, 132 lost, f_c = 0.8680 +/- 0.0210
QA s=0.6: 1000 particles, 303 lost, f_c = 0.6970 +/- 0.0285
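The quoted uncertainties look like 95% binomial confidence intervals, 1.96 * sqrt(fc * (1 - fc) / N); the PR does not state the estimator, so this sketch is an assumption that happens to reproduce the numbers above:

```python
import math

def fc_with_ci(n_particles, n_lost):
    """Confined fraction with an assumed 1.96-sigma binomial error bar."""
    fc = 1.0 - n_lost / n_particles
    err = 1.96 * math.sqrt(fc * (1.0 - fc) / n_particles)
    return fc, err

fc, err = fc_with_ci(1000, 62)   # QI s=0.3
assert abs(fc - 0.9380) < 1e-9
assert abs(err - 0.0149) < 5e-4

fc, err = fc_with_ci(1000, 303)  # QA s=0.6
assert abs(fc - 0.6970) < 1e-9
assert abs(err - 0.0285) < 5e-4
```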

Before fix (fast_class bug, 100k particles)

QI s=0.3: 100000 particles, 43175 lost, f_c = 0.5682
QI s=0.6: 100000 particles, 44791 lost, f_c = 0.5521

All trapped particles were marked as lost, giving fc ~ 0.57 (only passing particles counted as confined).

Generated plots

Figure 5 (QI): fig5_losses_qi.pdf
Figure 6 (QH): fig6_losses_qh.pdf
Figure 7 (QA): fig7_losses_qa.pdf
Figure 8 (classification): fig8_classification.pdf
Volume classification: volume_classification.pdf

Test plan

  • make test-fast passes 100% (46/46)
  • Confined fractions match JPP 2020 paper within statistical error
  • Loss plots (Fig 5-7) show correct patterns: late losses near trapped-passing boundary for QI, early prompt losses for QH s=0.6, spread losses for QA
  • Classification plot (Fig 8) shows regular/chaotic/lost separation matching paper
  • Volume classification plot produces correct (s, J_perp) colored grid

@qodo-code-review

Review Summary by Qodo

Restore volume sampling and bminmax output for classification

🐞 Bug fix ✨ Enhancement


Walkthroughs

Description
• Restore volume sampling for num_surf=0 to enable classification diagnostics
• Fix fast_class and tcut compatibility by removing incorrect mutual exclusivity check
• Generate bminmax.dat output file with 101 rows for classification plotting
• Migrate classification binning from Fortran postprocessor to Python script
• Fix hardcoded array index and VMEC file reference in classification example
Diagram
flowchart LR
  A["startmode=1<br/>num_surf=0"] -->|"volume sampling<br/>uniform s in [0,1]"| B["sample_particles"]
  B --> C["compute_pitch_angle_params"]
  C -->|"get_bminmax<br/>for all num_surf"| D["bmin/bmax arrays"]
  D --> E["write_output"]
  E -->|"generates"| F["bminmax.dat<br/>101 rows"]
  G["class_parts.dat<br/>6 columns"] --> H["plot_classification.py"]
  F --> H
  H -->|"bin in Python"| I["class_jpar.pdf<br/>class_ideal.pdf"]


File Changes

1. examples/classification/postprocess_class.f90 Miscellaneous +0/-100

Remove Fortran postprocessor for classification

• Deleted entire Fortran postprocessor file (100 lines)
• Functionality replaced by Python binning in plot_classification.py

examples/classification/postprocess_class.f90


2. src/params.f90 🐞 Bug fix +0/-3

Fix fast_class and tcut compatibility check

• Removed incorrect mutual exclusivity check between fast_class and tcut
• Both classifiers are complementary and can be used together

src/params.f90


3. src/simple_main.f90 🐞 Bug fix +20/-2

Enable volume sampling and bminmax output for classification

• Add volume sampling branch for num_surf=0 case in sample_particles
• Change get_bminmax condition from num_surf > 1 to num_surf /= 1 to include num_surf=0
• Add bminmax.dat output generation with 101 rows when classification is active
• Fix hardcoded array index by using dynamic index lookup for vertical line at s=0.25

src/simple_main.f90


View more (5)
4. examples/classification/plot_classification.py ✨ Enhancement +98/-63

Migrate classification binning from Fortran to Python

• Rewrite script to read class_parts.dat directly (all 6 columns) instead of separate Fortran-generated files
• Implement bin_classification function to bin particles in Python replacing Fortran postprocessor
• Fix hardcoded index bminmax[250,1] with dynamic index lookup using np.argmin
• Add main function wrapper and improve code structure

examples/classification/plot_classification.py


5. test/golden_record/compare_golden_results.sh 🧪 Tests +2/-2

Add classifier_combined to golden record tests

• Extend classifier test case handling to include new classifier_combined case
• Update condition to check for both classifier_fast and classifier_combined cases

test/golden_record/compare_golden_results.sh


6. examples/classification/Makefile ⚙️ Configuration changes +3/-12

Simplify Makefile by removing Fortran postprocessor

• Remove Fortran compiler variables and compilation rules for postprocess_class.x
• Simplify dependencies to directly generate class_parts.dat and bminmax.dat from simple.x
• Remove intermediate file targets (prompt*.dat, regular*.dat, stochastic*.dat)

examples/classification/Makefile


7. examples/classification/simple.in 🐞 Bug fix +1/-1

Fix VMEC file reference in classification example

• Fix VMEC file reference from QA to QH to match Makefile download target
• Ensures correct equilibrium file is used for classification example

examples/classification/simple.in


8. test/golden_record/classifier_combined/simple.in 🧪 Tests +15/-0

Add combined classifier golden record test configuration

• New test configuration file for combined classifier test case
• Enables both fast_class=.True. and tcut=1d-2 to exercise complementary classifiers
• Uses 32 test particles with num_surf=0 for volume sampling

test/golden_record/classifier_combined/simple.in




qodo-code-review bot commented Mar 19, 2026

Code Review by Qodo

🐞 Bugs (4) 📘 Rule violations (0) 📎 Requirement gaps (0) 📐 Spec deviations (0)



Action required

1. Plot binning normalization bug 🐞 Bug ✓ Correctness
Description
bin_classification() normalizes perp_inv by its sample max, but doplot_inner() overlays bmin/bmax
boundary curves in a different normalization, so the heatmap bins and boundary curves are on
inconsistent y-scales and the plot can be physically misleading. If all perp_inv values are
identical (e.g., all 0), hp becomes 0 and the script divides by zero when computing bin indices.
Code

examples/classification/plot_classification.py[R10-22]

+    hs = 1.0 / ns
+    pmin = 0.0
+    pmax = np.max(perp_inv)
+    hp = (pmax - pmin) / nperp
+
+    prompt = np.zeros((nperp, ns))
+    regular = np.zeros((nperp, ns))
+    stochastic = np.zeros((nperp, ns))
+
+    for ipart in range(len(s)):
+        i = min(ns, max(1, int(np.ceil(s[ipart] / hs)))) - 1
+        k = min(nperp, max(1, int(np.ceil(perp_inv[ipart] / hp)))) - 1
+
Evidence
The binning scale is defined by pmax=np.max(perp_inv) and hp=(pmax-pmin)/nperp, then k uses
perp_inv/hp, meaning the vertical axis is effectively perp_inv/pmax. However the boundary curves are
plotted as bmin_global/bminmax[:,1|2], which corresponds to (1/bmin_local)/(1/bmin_global) i.e.,
perp_inv*bmin_global, a different normalization unless pmax happens to equal 1/bmin_global;
additionally, when pmax==pmin, hp==0 and perp_inv/hp raises a division-by-zero.

examples/classification/plot_classification.py[5-31]
examples/classification/plot_classification.py[33-56]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`examples/classification/plot_classification.py` bins `perp_inv` using `pmax = np.max(perp_inv)`, but overlays curves computed from `bminmax.dat` that assume normalization by the global minimum B (`bmin_global`). This makes the plotted heatmap and boundary curves inconsistent, and can also crash when `pmax == pmin` (hp=0).

### Issue Context
The boundary curves use `bmin_global / bminmax[:, 1|2]`, which is consistent with plotting `perp_inv * bmin_global` (dimensionless, ideally in [0,1]). The binning should use the same quantity, and must guard against `hp == 0`.

### Fix Focus Areas
- examples/classification/plot_classification.py[5-31]
- examples/classification/plot_classification.py[33-56]

### Suggested approach
- Load `bminmax` first in `main()` and compute `bmin_global`.
- Define `perp_norm = perp_inv * bmin_global` and clip to `[0, 1]` (or set explicit bounds).
- Update `bin_classification()` to bin `perp_norm` in a fixed range `[0,1]` (so `hp = 1.0/nperp`), avoiding `pmax=np.max(...)`.
- Add a defensive guard so `hp` cannot be 0 (even if you keep data-driven pmax).

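A minimal sketch of the suggested fix (assuming perp_norm = perp_inv * bmin_global as described above; names and grid sizes are illustrative, not the actual script):

```python
import numpy as np

def bin_classification_fixed(s, perp_norm, ns=100, nperp=100):
    """Bin particles on a fixed (s, perp_norm) grid over [0,1] x [0,1].

    perp_norm is assumed to be perp_inv * bmin_global, clipped to [0,1],
    so the heatmap shares the normalization of the bmin/bmax overlay
    curves and hp can never be zero.
    """
    hs = 1.0 / ns
    hp = 1.0 / nperp  # fixed range: no data-driven pmax, no hp == 0
    counts = np.zeros((nperp, ns))
    p = np.clip(perp_norm, 0.0, 1.0)
    for sp, pp in zip(s, p):
        i = min(ns, max(1, int(np.ceil(sp / hs)))) - 1
        k = min(nperp, max(1, int(np.ceil(pp / hp)))) - 1
        counts[k, i] += 1
    return counts

# Degenerate input (all perp_inv identical) no longer divides by zero:
c = bin_classification_fixed(np.array([0.5]), np.array([0.0]))
assert c[0, 49] == 1
```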


2. Classifier skips bminmax for num_surf=0 🐞 Bug ✓ Correctness
Description
compute_pitch_angle_params() now calls get_bminmax when num_surf/=1, but
trace_orbit_with_classifiers() still only calls get_bminmax when num_surf>1. If num_surf remains 0
(e.g., startmode=2 reading a start.dat with varying radii), classification can use stale bmin/bmax
and compute incorrect passing/trap parameters.
Code

src/simple_main.f90[R771-775]

!$omp critical
        bmod = compute_bmod(z(1:3))
-        if (num_surf > 1) then
+        if (num_surf /= 1) then
            call get_bminmax(z(1), bmin, bmax)
        end if
Evidence
After this PR, simple_main uses if (num_surf /= 1) to trigger per-radius bmin/bmax, but
classification.f90 still guards with if(num_surf > 1). This leaves a gap for num_surf=0
workflows where bmin/bmax should vary with radius but won’t be updated inside the classifier’s
passing calculation.

src/simple_main.f90[762-777]
src/classification.f90[148-155]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`src/simple_main.f90` updated the bmin/bmax lookup guard to `num_surf /= 1`, but `src/classification.f90` still uses `num_surf > 1` before calling `get_bminmax`. This can skip per-radius bmin/bmax updates when `num_surf==0`.

### Issue Context
Classifier computations (`passing`, `trap_par`, `perp_inv`) depend on accurate `bmin/bmax`. For `num_surf=0` (volume / mixed-radius starts), bmin/bmax should be obtained via `get_bminmax`.

### Fix Focus Areas
- src/simple_main.f90[762-777]
- src/classification.f90[148-155]

### Suggested approach
- Change the guard in `trace_orbit_with_classifiers` from `if(num_surf > 1)` to `if (num_surf /= 1)` (matching simple_main), or explicitly handle `num_surf==0` as a case that must call `get_bminmax`.




Remediation recommended

3. Axis region not sampled 🐞 Bug ✓ Correctness
Description
For startmode=1 with num_surf=0, sample_particles() now calls the volume sampler with bounds [0,1],
but the volume sampler clamps the lower bound to s_min=0.01 so the axis region (s<0.01) is never
sampled. This contradicts the PR description’s stated sampling range and can bias diagnostics near
the axis.
Code

src/simple_main.f90[R428-431]

        if (1 == startmode) then
-            if ((0d0 < grid_density) .and. (1d0 > grid_density)) then
+            if (0 == num_surf) then
+                call sample(zstart, 0.0d0, 1.0d0)
+            else if ((0d0 < grid_density) .and. (1d0 > grid_density)) then
Evidence
The generic `sample(zstart, 0.0d0, 1.0d0)` call matches `sample_volume_single(zstart, s_inner, s_outer)` via the INTERFACE `sample`. `sample_volume_single` clamps `s_lo = max(s_inner, s_min)` with `s_min=0.01d0`, so even when called with `s_inner=0.0` it will not generate particles with s<0.01.

src/simple_main.f90[425-435]
src/samplers.f90[13-21]
src/samplers.f90[87-108]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`sample_particles()` now routes `startmode=1, num_surf=0` to `sample(zstart, 0.0d0, 1.0d0)`, which dispatches to `sample_volume_single`. That sampler clamps the lower bound to `s_min=0.01d0`, so particles are never started near the axis.

### Issue Context
PR description indicates uniform sampling over s∈[0,1]. Current implementation excludes [0,0.01), which can change classification plots/diagnostics.

### Fix Focus Areas
- src/simple_main.f90[425-435]
- src/samplers.f90[87-108]

### Suggested approach
- If axis singularity avoidance is still needed, reduce clamp to something much smaller (e.g., 1e-8) or make `s_min` configurable.
- Alternatively, only clamp when `s_inner < 0` or when coordinate transforms truly fail at/near the axis, rather than always excluding the region.

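A sketch of the suggested relaxed clamp (the 1e-8 floor is an assumed value, not from the PR, and the sampler is modeled in numpy rather than Fortran):

```python
import numpy as np

def sample_volume_single(n, s_inner, s_outer, s_min=1e-8, rng=None):
    """Uniform volume sampling in s with a configurable axis guard.

    Sketch of the suggested fix: keep a tiny clamp (here the assumed
    s_min=1e-8) instead of the hard-coded s_min=0.01, so the region
    near the axis is actually sampled.
    """
    rng = np.random.default_rng(rng)
    s_lo = max(s_inner, s_min)  # avoid only the exact axis singularity
    return rng.uniform(s_lo, s_outer, size=n)

s = sample_volume_single(100000, 0.0, 1.0, rng=0)
assert s.min() >= 1e-8
assert s.min() < 0.01  # axis region [1e-8, 0.01) is now reachable
```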


4. Golden tests miss bminmax 🐞 Bug ⚙ Maintainability
Description
write_output now writes bminmax.dat for classifier runs, but the golden-record comparison list for
classifier cases does not include it. Regressions in bmin/bmax table generation will not be detected
by CI golden comparisons.
Code

test/golden_record/compare_golden_results.sh[R126-130]

+        # Check if this is a classifier case with multiple files
+        elif [ "$CASE" = "classifier_fast" ] || [ "$CASE" = "classifier_combined" ]; then
            # List of files to compare for classifier_fast (excluding simple.in and wout.nc)
            # Note: fort.* files are excluded due to non-deterministic ordering in parallel execution
            CLASSIFIER_FILES="avg_inverse_t_lost.dat class_parts.dat confined_fraction.dat healaxis.dat start.dat times_lost.dat"
Evidence
SIMPLE writes bminmax.dat whenever classification is active (ntcut>0 or class_plot), but
compare_golden_results.sh’s CLASSIFIER_FILES omits bminmax.dat for
classifier_fast/classifier_combined, so the file is not compared between reference and current runs.

src/simple_main.f90[861-882]
test/golden_record/compare_golden_results.sh[126-136]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
Golden-record comparisons for classifier cases do not include `bminmax.dat`, even though it is now emitted by `write_output` when classification is active.

### Issue Context
Missing comparisons mean future changes to `get_bminmax` or table formatting won’t be caught.

### Fix Focus Areas
- test/golden_record/compare_golden_results.sh[126-136]

### Suggested approach
- Add `bminmax.dat` to `CLASSIFIER_FILES`.
- (Optional) Extend golden_record_sanity tests to include this file in at least one multi-file compare invocation.





@krystophny krystophny changed the title Restore volume sampling and bminmax output for classification Fix classification regression and reproduce JPP 2020 paper results Mar 20, 2026
The error stop treating fast_class and tcut as mutually exclusive was
wrong: fast_class enables J_parallel + topological classifiers while
tcut enables the fractal dimension classifier. They are complementary.

Also fix the classification example referencing QA instead of QH VMEC
file (mismatched with the Makefile download target), and add a combined
classifier golden record test exercising both features together.
The samplers-via-startmodes refactor (08cf86f, March 2025) broke
classification diagnostics by removing volume sampling and bminmax
output. This made it impossible to reproduce Figure 12(a-b) from the
2020 JPP paper, as reported by Rohan Ramasamy.

Fixes:
- startmode=1 with num_surf=0 now triggers volume sampling (uniform
  random s in [0,1]) instead of single-surface field line sampling
- compute_pitch_angle_params calls get_bminmax for num_surf=0 (not
  just num_surf>1), giving per-particle bmin/bmax at varying radii
- write_output generates bminmax.dat (101 rows) when classification
  is active, needed by the plot script
- plot_classification.py reads class_parts.dat directly (all 6
  columns) and bins particles in Python, replacing the standalone
  Fortran postprocessor that read only 5 of 6 columns
- Fixed hardcoded bminmax[250,1] index that assumed 1001 rows
- Deleted postprocess_class.f90, simplified Makefile
The binned array has shape (nperp, ns) which imshow displays correctly
as y=J_perp, x=s. The erroneous .T transposed this to (ns, nperp),
swapping the axes. Verified pixel-identical output against the original
Fortran postprocessor.
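The orientation argument can be checked without plotting: matplotlib's imshow maps array rows to the y axis and columns to the x axis, so a (nperp, ns) array already displays as y=J_perp, x=s.

```python
import numpy as np

# imshow(data) displays data[row, col] with row -> y and col -> x, so a
# binned array shaped (nperp, ns) needs no transpose; applying .T swaps
# the axes.  This reproduces the shape argument without plotting.
nperp, ns = 50, 100
binned = np.zeros((nperp, ns))
binned[10, 70] = 1.0  # particle at J_perp bin 10, s bin 70

assert binned.shape == (nperp, ns)    # rows = J_perp -> y axis
assert binned.T.shape == (ns, nperp)  # the erroneous transpose
assert binned.T[70, 10] == 1.0        # axes swapped after .T
```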
Add single-surface (s=0.3, s=0.6) and volume input files for the QH
configuration, a run script, and a plotting script that generates
Figure 6 style loss-time vs trapping-parameter plots, Figure 8 style
orbit classification in (theta, v_par/v) space, volume classification
in (s, J_perp) space, and Table 1 style regular-fraction statistics.

Also fix plot_orbit_chartmap_comparison.py to handle missing netCDF4
gracefully instead of crashing the test.
…example

The fast_class feature incorrectly propagated the J_parallel classifier
completion status (ierr_cot) to the integration error flag (ierr), causing
ALL classified orbits to be marked as lost. Regular orbits classified by
J_parallel should stop tracing early and be counted as confined; only
chaotic orbits should continue tracing to the end. This produced wildly
wrong confined fractions (e.g., fc=0.57 instead of fc=0.94 for QI s=0.3).

The fix sets regular=.True. for J_parallel-classified regular orbits
(when class_plot is off), which correctly: (a) stops integration early,
(b) counts the orbit in confpart at all remaining time steps, and
(c) sets times_lost to trace_time (confined). When class_plot is on,
the old behavior is preserved so classification output is written at ntcut.

The classification example is extended to reproduce all results from the
JPP 2020 paper (Albert, Kasilov, Kernbichler) using the actual paper
equilibria: QI (Drevlak 2014), QH (Drevlak 2018), QA (Henneberg 2019).

Input configs for all cases (Figures 5-8, Table 1, volume classification)
are provided under configs/. The plot script generates publication-quality
figures matching the paper. Confined fractions validated against the paper
at 1000 particles: QI s=0.3 fc=0.938, QH s=0.3 fc=0.993, QA s=0.6 fc=0.697.
Include the three VMEC equilibrium files used in the JPP 2020 paper:
- QI (Drevlak 2014): wout_23_1900_fix_bdry.nc
- QH (Drevlak 2018): wout_qh_8_7.nc
- QA (Henneberg 2019): wout_henneberg_qa.nc

Include reference output plots (100k particles, trace_time=1s):
- fig5_losses_qi.pdf, fig6_losses_qh.pdf, fig7_losses_qa.pdf
- fig8_classification.pdf, volume_classification.pdf

Update run_all.sh to use repo-local data/ directory instead of
hardcoded paths, so Rohan (or anyone) can reproduce by running:
  make && ./examples/classification/run_all.sh
- Remove wout .nc files and output PDFs from SIMPLE repo (belong in
  the private proxima-simple-classification repo)
- run_all.sh now auto-downloads wout files from the proxima repo
- Add qi_volume.in and qa_volume.in for volume classification of all
  3 configs
- Fix Fig 8 plot to use dense colored grid (imshow) matching the
  paper's "brazilian flag" style instead of sparse scatter markers
- Update plot_paper_results.py to generate volume plots for all configs
Replace the sparse 100x100 dominant-category grid with a 50x50
Gaussian-smoothed fractional composition approach that fills the
entire (theta/pi, v_par/v) plane with continuous colors -- the
"brazilian flag" look matching the JPP 2020 paper Figure 8.
Reduce overlay marker sizes for early-loss and chaotic particles.
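A hedged sketch of this smoothing approach (scipy-based; sigma, grid size, and function names are assumptions, not the actual plot-script code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fractional_composition(counts, sigma=1.0):
    """Gaussian-smooth per-category counts and normalize to fractions.

    counts: array (ncat, ny, nx) of per-category bin counts.  The
    smoothed fractions vary continuously across the plane, giving the
    filled "brazilian flag" look instead of a sparse dominant-category
    grid.  sigma is an assumed parameter.
    """
    smooth = np.stack([gaussian_filter(c, sigma) for c in counts])
    total = smooth.sum(axis=0)
    total[total == 0] = 1.0  # avoid 0/0 in empty regions
    return smooth / total

counts = np.zeros((3, 50, 50))
counts[0, 10, 10] = 5.0
counts[1, 10, 12] = 5.0
frac = fractional_composition(counts)
assert frac.shape == (3, 50, 50)
assert np.all(frac >= 0) and np.all(frac <= 1 + 1e-12)
```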
Change num_surf > 1 to num_surf /= 1 in trace_orbit_with_classifiers,
matching the fix already applied to compute_pitch_angle_params. For
volume sampling (num_surf=0), each particle now gets bmin/bmax from
its actual flux surface rather than using fixed values from sbeg.
Use 4 categories (passing/regular-trapped/chaotic/prompt-loss) instead
of 3 so the trapped-passing boundary (dashed line) is visible as the
gray-to-colored transition. Previously passing and regular-trapped
were both yellow, hiding the boundary inside a uniform region.
Extract init_bminmax_arrays from get_bminmax lazy initialization into
an explicit subroutine. Call it during startup while magfie is still
in VMEC mode, ensuring find_bminmax scans geometric (theta, phi)
angles rather than canonical coordinates.
Skip passing particles in the binning so the passing region is white.
The trapped-passing boundary (dashed line) now visibly traces the edge
of the colored data region.
@krystophny krystophny force-pushed the fix/classification-regression branch from a3210dd to cec6270 Compare March 27, 2026 17:44
@krystophny krystophny changed the base branch from feature/venv-setup to main March 27, 2026 17:44
